Search Results: "erik"

10 November 2013

Jelmer Vernooij: The state of distributed bug trackers

A whopping 5 years ago, LWN ran a story about distributed bug trackers. This was during the early waves of distributed version control adoption, and so everybody was looking for other things that could benefit from decentralization. TL;DR: Not much has changed since.

The potential benefits of a distributed bug tracker are similar to those of a distributed version control system: the ability to fork any arbitrary project, easier collaboration between related projects, and offline access to full project data. The article discussed a number of systems, including Bugs Everywhere, ScmBug, DisTract, DITrack, ticgit and ditz. The conclusion of our favorite grumpy editor at the time was that all of the available distributed bug trackers were still in their infancy. All of these piggyback on a version control system somehow - either by reusing the VCS database, by storing their data along with the source code in the tree, or by adding custom hooks that communicate with a central server. Only ScmBug had been somewhat widely deployed at the time, but its homepage gives me a blank page now. Of the trackers reviewed by LWN, Bugs Everywhere is the only one that is still around and somewhat active today.

In the years since the article, a handful of new trackers have come along. Two new version control systems - Veracity and Fossil - come with the kitchen sink included, and so feature a built-in bug tracker and wiki. There is an extension for Mercurial called Artemis that stores issues in an .issues directory colocated with the Mercurial repository. The other new tracker that I could find (though it, too, has not changed since 2009) is SD. It uses its own distributed database technology for storing bug data - called Prophet - and doesn't rely on a VCS. One of its nice features is that it supports importing bugs from foreign trackers.

Some of these provide the benefits you would expect of a distributed bug tracker. Unfortunately, all those I've looked at fail to provide even the basic functionality I would want in a bug tracker. More so than with a version control system, regular users interact with a bug tracker: they report bugs and provide comments and feedback on fixes. All of the systems I tried make these actions a lot harder than your average Bugzilla or Mantis instance does - they provide a limited web UI, or no web interface at all.

Update: LWN later also published articles on SD and on Fossil. Other interesting links are Eric Sink's article on distributed bug tracking (Eric works at SourceGear, who develop Veracity) and the dist-bugs mailing list.

28 September 2013

Jelmer Vernooij: Book Review: Bazaar Version Control

Packt recently published a book on Version Control using Bazaar, written by Janos Gyerik. I was curious what the book was like, and they kindly provided me with a digital copy.

The book is split into roughly five sections: an introduction to version control using Bazaar's main commands, an overview of the available workflows, some chapters on the available extensions and integration, some more advanced topics and, finally, a quick introduction to programming with bzrlib. It is assumed the reader has no pre-existing knowledge of version control systems.

The first chapters introduce the reader to the concepts of revision history, branching and merging, and finally collaboration. All concepts are first discussed in theory and then demonstrated using the Bazaar command-line UI and the bzr-explorer tool. The book follows roughly the same track as the official documentation, but it is more extensive and has more fancy drawings of revision graphs.

The middle section of the book discusses the modes in which Bazaar can be used - centralized or decentralized - as well as the various ways in which code can be landed in the main branch ("workflows"). The selection of workflows in the book is roughly the same as that in the official Bazaar documentation. The author briefly touches on a number of other software engineering topics, such as code reviews, code formatting and automated testing, though not in enough depth to be useful to people who are unfamiliar with these techniques. Both the official documentation and the book complicate things unnecessarily by listing every possible option.

The next chapter is a basic howto on the use of Bazaar with various hosting solutions, such as Launchpad, Redmine and Trac. The Advanced Features chapter covers a wide range of obscure and less obscure features in Bazaar: uncommit, shelves, re-using working trees, lightweight checkouts, stacked branches, signing revisions and using e-mail hooks. The chapter on foreign version control system integration is a more extensive version of the public docs. It has some factual inaccuracies; in particular, it recommends installing a two-year-old, buggy version of bzr-git. The last chapter provides quite a good introduction to the Bazaar APIs and plugin writing. It is a fair bit better than what is available publicly.

Overall, it's not a bad book, but also not a huge step forward from the official documentation. I might recommend it to people who are interested in learning Bazaar and who do not have any experience with version control yet. Those who are already familiar with Bazaar or another version control system will not find much new. The book misses an opportunity by following the official documentation so closely: it has the same omissions and the same overemphasis on describing every possible feature. I had hoped to read more about Bazaar's data model, its file format and some of the common problems, such as parallel imports, format hell and slowness.
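As a taster, the basic command-line workflow that the book's first chapters walk through is the standard Bazaar one; here is a minimal sketch of my own (not an excerpt from the book):

  bzr init myproject              # create a new branch
  cd myproject
  echo "hello" > hello.txt
  bzr add hello.txt               # put the file under version control
  bzr commit -m "Add hello.txt"   # record a revision
  bzr branch . ../feature         # branch off, hack there, then merge back
  cd ../feature
  # ...edit and commit...
  cd ../myproject
  bzr merge ../feature
  bzr commit -m "Merge feature branch"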

16 June 2013

Daniel Pocock: Monitoring with Ganglia: an O'Reilly community book project

I recently had the opportunity to contribute to an O'Reilly community book project, developing the book Monitoring with Ganglia in collaboration with other members of the Ganglia team.

The project itself, as a community book, pays no royalties back to the contributors, as we have chosen to donate all proceeds to charity. People who contributed to the book include Robert Alexander, Jeff Buchbinder, Frederiko Costa, Alex Dean, Dave Josephsen, Bernard Li, Matt Massie, Brad Nicholes, Peter Phaal and Vladimir Vuksan, and we also had generous assistance from various members of the open source community who helped in the review process.

Ganglia itself started at the University of California, Berkeley as an initiative of Matt Massie, for monitoring HPC cloud infrastructure. My own contact with Ganglia only began in 2008, when I was offered the opportunity to work full-time on the enterprise-wide monitoring systems for a large investment bank. Ganglia had been chosen for this huge project due to its small footprint, support for many platforms and its ability to work on a heterogeneous network, as well as providing dedicated features for the bank's HPC grid. This brings me to one important point about Ganglia: it's not just about HPC any more. While it is extremely useful for clusters, grids and clouds, it is also quite suitable for a mixed network of web servers, mail servers, databases and all the other applications you may find in a small business, education or ISP environment.

Instantly up and running with packages

One of the most compelling features, even for small sites with fewer than 10 nodes, is the ease of installation: install the packages on Debian, Ubuntu, Fedora, OpenCSW and some other platforms, and it just works. Ganglia nodes will find each other over multicast, instantly, with no manual configuration changes necessary. On one of the nodes, the web interface must be installed for viewing the statistics. Dare I say it: it is so easy, you hardly even need the book for a small installation (a minimal sketch appears at the end of this post). Where the book is really compelling is if you have hundreds or thousands of nodes, or if you want custom charts, custom metrics or anything else beyond just installing the package. If monitoring is more than 10% of your job, the book is probably a must-have.

Excellent open source architecture

Ganglia's simplicity is largely thanks to the way it leverages other open source projects, such as Tobi Oetiker's RRDtool and PHP. Anybody familiar with these tools will find Ganglia particularly easy to work with and customise.

Custom metrics: IO service times

One of my own contributions to the project has been the creation of ganglia-modules-linux, some plugins for Linux-specific metrics, and ganglia-modules-solaris, providing some similar metrics for Solaris. These projects on GitHub provide an excellent base for people to fork and implement their own custom metrics in C or C++. The book provides a more detailed account of how to work with the various APIs for Python, C/C++, gmetric (command line/shell scripts) and Java.

The new web interface

For people who tried earlier versions of Ganglia (and for those who installed versions < 3.3.0 and still haven't updated), the new web interface is a major improvement and well worth the effort to install. It is available in the most recent packages (for example, it is in Debian 7 (wheezy) but not in Debian 6). It was originally promoted as a standalone project (code-named gweb2) but was adopted as the official Ganglia web interface around the release of Ganglia 3.3.0. This web page provides a useful overview of what has changed, and here is the original release announcement.
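As a footnote to the installation section above, here is a minimal sketch of a small Debian/Ubuntu setup (package names as found in the Debian archive; the gmetric example is illustrative, with a made-up metric name):

  # on every node you want monitored: the gmond agent
  apt-get install ganglia-monitor
  # on one node: the gmetad collector plus the web interface
  apt-get install gmetad ganglia-webfrontend
  # with the default multicast configuration the nodes discover each
  # other automatically; the web UI then typically appears under /ganglia

  # a one-off custom metric from a shell script, via gmetric
  gmetric --name=io_svctm --value=4.2 --type=float --units=ms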

21 March 2013

Jonas Smedegaard: What is FreedomBox? And when can I have one?

On Wednesday the 7th of November I gave a talk at EPFSUG in Brussels about FreedomBox. EPFSUG is an interest group of Free Software users working inside the European Parliament. The FreedomBox is a project to help non-geeks care about their personal privacy when online, the same way geeks have practiced it for a decade or more. The goal is a small, cheap physical box, looking and operating like the internet gateway or wifi router many are accustomed to nowadays - but with three additions:

"Privacy" is commonly mistaken for "secrecy": privacy is what keeps you in control of your information - you can then use that control to keep secrets from others or to share with others as you like. Talking about personal privacy in the European Parliament can seem a bit of a stretch. It is not a home but a (huge) office space, where it is perhaps less obvious that you should treat some information as personal, or that you are even allowed to do so. Try watching the videos, and if you have questions, please don't hesitate to get in touch with me about them.

At the EPFSUG meeting there was also a presentation by an employee who had successfully installed and used Free Software on the internal network of the Parliament, but was later told that it wasn't allowed - because the IT staff need to be in control of your computer activities at the place! The head of IT services at the Parliament attended the meeting and gave a short improvised talk at the end, expressing positive interest on behalf of the established IT services towards our grassroots activities - and even explicitly praising EPFSUG as the ideal place for all EU citizens to ask questions about Free Software in relation to the European Parliament. See for yourself in the video of Giancarlo! Thanks a lot especially to Erik Josefsson for making this event a reality! Slides, sources and videos from the event.

27 February 2013

Sylvain Le Gall: planet.ocaml.org spring cleaning

Hi planet.ocaml.org. Just a quick post to thank Marek Kubica for his help with the planet.ocaml.org spring cleaning. Here are the feeds that have been removed:
  • http://redlizards.com/blog/feed/?tag=ocaml
  • http://blog.mestan.fr/feed/?cat=16
  • http://www.sairyx.org/tag/ocaml/feed/
  • http://blog.dbpatterson.com/rss
  • http://www.nicollet.net/toroidal/ocaml/feed/
  • http://ocamlhackers.ning.com/profiles/blog/feed?tag=ocaml&xn_auth=no
  • http://eigenclass.org/R2/feeds/rss2/all
  • http://procrastiblog.com/category/ocaml/feed
  • http://savonet.sourceforge.net/liquidsoap.rss
Here is the feed that has been added:
  • http://newblog.0branch.com/rss.xml
Here are the feeds that have been updated:
  • https://ocaml.janestreet.com/?q=rss.xml
  • http://scattered-thoughts.net/atom.xml
  • http://www.rktmb.org/feed/tag/ocaml/atom
  • http://nleyten.com/feed/tag/ocaml/atom
  • http://www.mega-nerd.com/erikd/Blog/index.rss20
  • http://y-node.com/blog/feeds/latest/
If you want your blog added back, please follow the howto to add your feed to the planet. We didn't remove feeds on purpose; this was just a way to get rid of a lot of 404s. And don't forget: planet.ocamlcore.org is now served by planet.ocaml.org! Update your feed reader.

18 October 2012

Jonas Smedegaard: Blended configuration

I want all my computing environments "furnished" similarly and as close to my personal taste as possible - be it big or small machines, self-administered or user accounts on machines run by others. And not only for myself - something similar is needed for the environments I help maintain for my friends Siri, Erik and Peter. The environments are seldom fully identical, so simply copying things around is rarely useful. Here's a checklist of things that I need to customize and synchronize for me and my friends to feel at $HOME: The above is a work in progress. I hope to extend it later with more details - probably by linking to separate pages about each subtopic.

1 July 2012

Ingo Juergensmann: 100% CPU load due to Leap Second

This morning Gregor Samsa woke up... oh, pardon! This morning I woke up and found myself puzzled, because my home server was eating up the CPU cycles of all 4 of its cores. mysqld in particular was high on CPU load: 100% CPU load for the mysql-server instance and 100% CPU load for akonadiserver's own mysqld instance. Restarting KDE and mysql-server didn't help on my Debian unstable machine. The next step was upgrading the system. Sometimes this does help, but not today. Looking at bugs.debian.org for mysql-server didn't reveal any help either. So my next logical step was to ask on #debian-devel on IRC. And my question was answered very quickly:
11:28 < ij> since tonight I've got two mysqld processes running at 100% CPU, one spawned by akonadi and
the other is the mysqld from mysql-server (unstable that is). is this an already known issue?
haven't found anything on b.d.o for mysql-server, though
11:29 < mrvn> ij: topic
11:29 < mrvn> you need to set the time
11:30 < ij> waaaaah!
11:30 < mrvn> ij: indeed.
The topic was at that time:
100% CPU? Reset leap second http://xrl.us/bnde4w
So, it was caused by the leap second. Although you might suspect mysql of doing some nasty things (which, IMHO, is always a good guess ;)), this time the issue is within the Linux kernel itself, as a commit on git.kernel.org clarifies. To fix the issue you need to set the time manually using the following command, or just reboot:
date -s "`date`"
So far I found these applications being hit by this kernel bug:
  • mysql-server
  • akonadi (as it uses its own mysql instance)
  • Firefox
  • Openfire Jabber server (because it's using Java, which seems to trigger the problem as mysql does)
  • VirtualBox's VBoxSVC process
  • puppetmaster from package puppet, reported by Michael
  • mythfrontend, reported by pos on #debian-devel
  • Jetty, Hudson, Puppet agent and master, reported by Christian
  • milter-greylist, reported by E. Recio
  • dovecot, reported by Diogo Resende
  • Google Chrome, reported by Erik B. Andersen
  • if you find more apps, please comment and I'll include them here...
So, hope this helps and many thanks to mrvn and infinity on #debian-devel for the help!

26 June 2012

Ingo Juergensmann: Confusion about mkfs.xfs and log stripe size being too big

Recently I bought some new disks, put them into my computer, and built a RAID5 on these 3x 4 TB disks. Creating a physical volume (PV) with pvcreate, a volume group (VG) with vgcreate and some logical volumes (LV) with lvcreate was as easy and familiar as creating an XFS filesystem on the LVs... but something was strange! I had never seen this message before when creating XFS filesystems with mkfs.xfs:
log stripe unit (524288 bytes) is too large (maximum is 256KiB)
log stripe unit adjusted to 32KiB
Usually I don't mess around with the parameters of mkfs.xfs, because mkfs.xfs is smart enough to find near-optimal parameters for your filesystem. But apparently mkfs.xfs wanted to use a log stripe unit of 512 kiB, although the maximum size for this is 256 kiB. Why? So I started to google and in parallel asked on #xfs@freenode. Eric Sandeen, one of the core developers of XFS, suggested that I take the issue to the mailing list. He had already faced this issue himself, but couldn't remember the details. So I collected some more information about my setup and wrote to the XFS ML. Of course I included information about my RAID5 setup:
muaddib:/home/ij# mdadm --detail /dev/md7
/dev/md7:
Version : 1.2
Creation Time : Sun Jun 24 14:58:21 2012
Raid Level : raid5
Array Size : 7811261440 (7449.40 GiB 7998.73 GB)
Used Dev Size : 3905630720 (3724.70 GiB 3999.37 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Jun 26 05:13:03 2012
State : active, resyncing
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Resync Status : 98% complete
Name : muaddib:7 (local to host muaddib)
UUID : b56a714c:d193231e:365e6297:2ca61b65
Events : 16
Number Major Minor RaidDevice State
0 8 52 0 active sync /dev/sdd4
1 8 68 1 active sync /dev/sde4
2 8 84 2 active sync /dev/sdf4
Apparently, mkfs.xfs takes the chunk size of the RAID5 and wants to use it for its log stripe unit setting. So that explains why mkfs.xfs wants to use 512 kiB - but why is the chunk size 512 kiB at all? I didn't mess around with chunk sizes when creating the RAID5 either, and all of my other RAIDs use chunk sizes of 64 kiB. The reason was quickly found: the new RAID5 has a 1.2 format superblock, whereas the older ones have a 0.90 format superblock. So it seems that at some point the default setting in mdadm for which superblock format to use for its metadata was changed. I asked on #debian.de@ircnet and someone answered that this was changed in Debian after the release of Squeeze. Even in Squeeze the 0.90 format superblock was obsolete and had only been kept for backward compatibility. Well, ok. There actually was a change of defaults, which explains the behaviour of mkfs.xfs, wanting to set the log stripe unit to 512 kiB. But what is the impact of falling back to a 32 kiB log stripe unit? Dave Chinner, another XFS developer, explains:
Best thing in general is to align all log writes to the
underlying stripe unit of the array. That way as multiple frequent
log writes occur, it is guaranteed to form full stripe writes and
basically have no RMW overhead. 32k is chosen by default because
that's the default log buffer size and hence the typical size of
log writes.

If you increase the log stripe unit, you also increase the minimum
log buffer size that the filesystem supports. The filesystem can
support up to 256k log buffers, and hence the limit on maximum log
stripe alignment.
And in another mail, when asked whether it's possible to raise the 256 kiB limit to 512 kiB, given that mdadm now defaults to 512 kiB as well:
You can't, simple as that. The maximum supported is 256k. As it is,
a default chunk size of 512k is probably harmful to most workloads -
large chunk sizes mean that just about every write will trigger a
RMW cycle in the RAID because it is pretty much impossible to issue
full stripe writes. Writeback doesn't do any alignment of IO (the
generic page cache writeback path is the problem here), so we will
almost always be doing unaligned IO to the RAID, and there will be
little opportunity for sequential IOs to merge and form full stripe
writes (24 disks @ 512k each on RAID6 is a 11MB full stripe write).

IOWs, every time you do a small isolated write, the MD RAID volume
will do a RMW cycle, reading 11MB and writing 12MB of data to disk.
Given that most workloads are not doing lots and lots of large
sequential writes this is, IMO, a pretty bad default given typical
RAID5/6 volume configurations we see....
So, reducing the log stripe unit is in fact a good thing[TM]. Anyone who would benefit from a larger log stripe unit would be knowledgeable enough to play around with mkfs.xfs parameters and tune them to the needs of the workload (see the sketch below). Eric Sandeen suggested, though, removing the warning from mkfs.xfs. Dave objected; maybe a good compromise would be to extend the warning with the URL of a FAQ entry explaining this issue in more depth than a short warning can? Maybe someone else is facing the same issue, searching for information, and will find this blog entry helpful in the meantime...
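For readers who do want to tune these knobs, here is a rough sketch (the values are illustrative only, and /dev/vg/lv is a placeholder for your own logical volume):

  # check which superblock format and chunk size an existing array uses
  mdadm --detail /dev/md7 | grep -E 'Version|Chunk Size'
  # create an array with the old 64 kiB chunk size instead of the 512 kiB default
  mdadm --create /dev/md8 --level=5 --raid-devices=3 --chunk=64 /dev/sdd4 /dev/sde4 /dev/sdf4
  # or tell mkfs.xfs explicitly which data and log stripe units to use
  # (su = stripe unit, sw = stripe width in data disks; a 3-disk RAID5 has 2)
  mkfs.xfs -d su=64k,sw=2 -l su=32k /dev/vg/lv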

3 March 2012

Petter Reinholdtsen: Stopmotion for making stop motion animations on Linux - reloaded

Many years ago, the Skolelinux / Debian Edu project initiated a student project to create a tool for making stop motion movies. The proposal came from a teacher needing such a tool on Skolelinux. The project, called "stopmotion", was manned by two extraordinary students and won a school award and a national award. It was initiated and mentored by Herman Robak, and manned by the students Bjørn Erik Nilsen and Fredrik Berg Kjølstad. They got in touch with people at the Aardman Animation studio and received feedback on how professionals would like such a stopmotion tool to work, and the end result was and is used by animators around the globe. But, as is usual after studying, both got jobs and went elsewhere and did not have time to properly tend to the project, and it has been lingering for a few years now. Until last year... Last year some of the users got together with Herman and moved the project to Sourceforge, in effect restarting the project under a new name, linuxstopmotion. The name change was done to make it possible to find the project using Internet search engines (try searching for 'stopmotion' to see what I mean). I've been following the mailing list, and the improvements already in place and those planned for the future are encouraging. If you want to make stop motion movies, check it out. :)

21 November 2011

Siegfried Gevatter: Debian Games Team Meeting

This announcement was provided by Martin Erik Werner. I'm reproducing it for Planet Ubuntu. The Debian/Ubuntu Games Team is organizing another meeting. If you're into developing and/or packaging of games, or just generally curious about games in Debian/Ubuntu, you should join! It will be held next Saturday, the 26th of November, in the #debian-games channel on irc.debian.org (also known as irc.oftc.net) at 10:00 UTC. More information is available on the wiki page Games/Meetings/2011-11-26. The agenda starts off with the usual round of introductions, so if you're new to the Team, say hi! Then we'll be going through the action items from the last meeting, including work on the Debian Games LiveCD, and what's up with the /usr/games/ path anyways? Next we'll be moving on to how the Games Team is faring in terms of members: are new recruits finding it comfortable, should we advertise more? Next up it's the squeaky penguin: Wheezy is somewhere in the not-completely-distant future; how does that affect the Games Team, should we be scuffling to get specific tasks done? Then on to the recurring question of sponsoring, and how to improve it: should we be utilising DebExpo more? What about our favourite PET? Lastly, PlayDeb is doing some really neat stuff; would it make sense for our team to push some changes to PlayDeb? Would it make sense for PlayDeb to push changes to Debian Games? Hopes are for a good discussion, and a merry time, hope to see you all there! Related posts:
  1. Debious: A dubious Debian packaging GUI
  2. One week with Debian
  3. A list of some commercial GNU/Linux games

Paul Wise: Debian/Ubuntu games team meeting #6

The Debian/Ubuntu games team is organizing another meeting; if you're into developing and/or packaging of games, or just generally curious about games in Debian/Ubuntu, you should join! It will be held next Saturday, on the 26th of November, in the #debian-games channel on irc.debian.org (also known as irc.oftc.net). More information is available on the meeting wiki page. The agenda starts off with the usual round of introductions, so if you're new to the team, say hi! Then we'll be going through the action items from the last meeting, including work on the Debian Games LiveCD, and what's up with the /usr/games/ path anyways? Next we'll be moving on to how the games team is faring in terms of members: are new recruits finding it comfortable, should we advertise more? Next up it's the squeaky penguin: Wheezy is somewhere in the not-completely-distant future; how does that affect the games team, should we be scuffling to get specific tasks done? Then on to the recurring question of sponsoring, and how to improve it: should we be utilising DebExpo more? What about our favourite PET? Lastly, PlayDeb is doing some really neat stuff; would it make sense for our team to push some changes to PlayDeb? Would it make sense for PlayDeb to push changes to Debian Games? Hopes are for a good discussion, and a merry time, hope to see you all there! (This text provided by Martin Erik Werner)

21 August 2011

Vincent Sanders: A year of entropy

It has been a couple of years now since the release of the Entropy Key. Around a year ago we finally managed to have enough stock on hand that I obtained a real production unit and installed it in my border router.

I installed the Debian packages, configured ekeyd into EGD server mode, installed the EGD client packages on my other machines, and pretty much forgot about it.
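For reference, the Debian side of that setup amounts to roughly the following sketch (package names from the Debian archive; the EGD server mode itself is switched on in ekeyd's configuration file, so consult its documentation rather than taking this verbatim):

  # on the host with the Entropy Key attached: the daemon itself
  apt-get install ekeyd             # then enable EGD server mode in its config file
  # on each client machine: the EGD client that feeds the kernel entropy pool
  apt-get install ekeyd-egd-linux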

The recent release of the ekey host software (version 1.1.4) reminded me that I had been quietly collecting statistics for almost a whole year and had some munin graphs to share.

The munin graphs of the generated output are pretty dull. Aside from the minor efficiency improvement in the 1.1.3 release, installed mid-December, the generated rate has been a flat 3.93 kilobytes a second.
The temperature sensor on the Entropy Key shows a good correlation with the on-board CPU thermal sensors in the host system.
The host border router/server is a busy box which provides most network services, including secure LDAP and SSL web services; it shows no sign of having run out of entropy at any point in the year.
The site's main file server and compile engine is a 4-core, 8-gigabyte system with 12 drives. This system is heavily used, with high load almost all the time, but without the EGD client running it has almost no entropy available.
The next system is my personal workstation. This machine often gets rebooted and is usually turned off overnight, which is why there are gaps in the graph and odd discontinuities. Nonetheless, entropy is always available, just like on the rest of my systems ;-)
And almost as a "control", here is a file server on the same network which has not been running the EGD client. (Ok, ok already, it was misconfigured and I am an idiot ;-)
In conclusion, it seems an Entropy Key can keep at least this small network completely filled up with all the entropy it needs without much fuss. YAY!

21 April 2011

Timo Jyrinki: Free Society Conference and Nordic Summit (FSCONS 2010)

Just a note that the slides are available (non-slideshare link) for my presentation Tuning an old but free phone (description), which I held at the tremendously great FSCONS 2010. The event could be described as a smaller-scale FOSDEM, but that would actually be downplaying it, since I've never seen the effects of free software on society covered on such a scale anywhere else. My talk was among the purely technical ones, though.

I was planning to hold off on this blog post until the recorded videos arrive, but since it seems that might not happen this year, I will just post it now that the slides are available.

I've shared a few photos as well at Flickr...

Photo captions:
  • Keynote: Karin Kosina, The Inanna Project. A tech + art workshop for female artists in Damascus, Syria. An experiment in art, technology, and the transformative power of Free Hardware and Software.
  • Erik de Bruijn, The Future of RepRap, a self-replicating open source 3D printer that fabricates arbitrary objects including parts of itself.
  • Social event at the Berg 211.
  • Malin Nilsson on Gender, class and global flows. Using free software to fuel a revolution in home-based industrial work.
  • Keynote: Glyn Moody, Ethics of Intellectual Monopolies.
  • Keynote: Glyn Moody, Ethics of Intellectual Monopolies (audience).

A few summaries are available on a Qaiku seminar channel.

21 March 2011

Lars Wirzenius: Obnam version 0.15 and 0.15.1, and btree/larch 0.19

I've just made a couple of software releases: this is the first release of Obnam this year. Below is a summary of the changes. My apt repository has binary packages for amd64, i386, and armel. I've started mentoring someone to help me maintain the Debian packages, and I hope to start uploading Obnam and its dependencies to Debian within a few weeks. Obnam is still in ALPHA, but the only thing missing before moving to BETA is encryption support. There are a bunch of bugs that need to be fixed, but I'm not going to block the BETA on them. I'd be very interested to hear any feedback on this version.
Bugs fixed:
Improvements and other changes:

22 December 2010

Petter Reinholdtsen: How to test if a laptop is working with Linux

The last few days I have spent at work here at the University of Oslo, testing whether the new batch of computers will work with Linux. Every year for the last few years the university has organized a shared bid for a few thousand computers, and this year HP won the bid. Two different desktops and five different laptops are on the list this year. We in the UNIX group want to know which of these computers work well with RHEL and Ubuntu, the two Linux distributions we currently handle at the university. My test method is simple, and I share it here to get feedback and perhaps inspire others to test hardware as well. To test, I PXE-install the OS version of choice, log in as my normal user, run a few applications and plug in selected pieces of hardware. When something fails, I make a note of it in the test matrix and move on. If I have some spare time I try to report the bug to the OS vendor, but as I only have the machines for a short time, I rarely have the time to do this for all the problems I find. Anyway, to get to the point of this post: here are the simple tests I perform on a new model (a sketch of typical commands appears at the end of this post). By now I suspect you are really curious what the test results are for the HP machines I am testing. I'm not done yet, so I will report the test results later. For now I can report that the HP 8100 Elite works fine, that hibernation fails with the HP EliteBook 8440p on Ubuntu Lucid, and that audio fails on RHEL6. Ubuntu Maverick worked with the 8440p. As you can see, I have most machines left to test. One interesting observation is that Ubuntu Lucid has almost twice the framerate of RHEL6 with glxgears. No idea why.
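For illustration, here is a sketch of the kind of smoke tests described above, on the Ubuntu side (my own reconstruction, assuming pm-utils and the usual X diagnostic tools are installed; not the author's actual checklist):

  glxinfo | grep "direct rendering"   # is 3D acceleration active?
  glxgears                            # rough framerate comparison
  pm-suspend                          # suspend to RAM, then check it wakes
  pm-hibernate                        # hibernate, then check it resumes
  speaker-test -t wav -c 2            # audio out on both channels
  tail -f /var/log/syslog             # watch while plugging in USB/SD/VGA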

22 February 2010

Stefano Zacchiroli: RC bugs of the week - issue 22

RCBW - #22. With a mini-rush over the weekend, I'm now back on track with the weekly schedule of RCBW; here are this week's squashes:
About this week's highlights:

21 January 2010

Andrew Pollock: [life] Fetal MRI results

Yesterday we went back to Stanford for another ultrasound and a fetal MRI. We had pretty much the same gang doing the ultrasound as two weeks ago, so that was a nice bit of continuity. A paediatric radiologist came in at the end to take a look. She was the most confident of anyone that everything was going to be okay. She thought she could see something resembling the cavum septum pellucidum on the ultrasound. I think the ultrasound report said it was an "unusual shape" or something like that. After that, we got packed off for the fetal MRI. There was a bit of a wait, as there's only one MRI machine for the children's hospital, and the studies tend to take 30 to 45 minutes, but we eventually got in. I got to sit in the room with Sarah while they did the MRI. I was hoping to be able to sit in the control room instead, so I could look over their shoulder and see how it was all done. We both got earplugs because the machine is pretty noisy. It's not the hammering sound that they seem to go for on TV, it's more various different pitches of a horn. The radiologist told us she'd probably read the MRI later that night, as there'd be a bit of a backlog with the long weekend, and that we'd get a call today. Sarah got impatient this afternoon and called her obstetrician, and he called her back shortly afterwards saying he'd spoken to the radiologist and everything was fine. Exact specifics are not known at this time, but we'll be quizzing the obstetrician at our next appointment in a couple of weeks. Needless to say, we're both extremely relieved that everything is okay, and can scrub one thing off the list of things to have to worry about at the moment. Now we can just concentrate on trying to move house this weekend.

30 October 2009

Matt Brubeck: Compleat: Programmable Completion for Everyone

Compleat is an easy, declarative way to add smart tab-completion for any command-line program. For a quick description, see the README. For more explanation and a brief tutorial, keep reading...

Background

I'm one of those programmers who loves to carefully tailor my development environment. I do nearly all of my work at the shell or in a text editor, and I've spent a dozen years learning and customizing them to work more quickly and easily. Most experienced shell users know about programmable completion, which provides smart tab-completion for supported programs like ssh and git. (If you are not familiar with it, you really should install and enable bash-completion, or the equivalent package for your chosen shell.) You can also add your own completions for programs that aren't supported, but in my experience most users never bother. When I worked at Amazon, everyone used Zsh (which has a very powerful but especially baroque completion system) and shared the completion scripts they wrote for our myriad internal tools. Now that I'm in a startup with few other command-line die-hards, I'm on my own when it comes to extending my shell. So I read the fine manual and started writing my own completions. Over on GitHub you can see the script I made for three commands from the Google Android SDK. It's 200 lines of shell code, fairly straightforward if you happen to be familiar with the Bash completion API. But as I cranked out more and more case statements, I felt there must be a better way...

The Idea

It's not hard to describe the usage of a typical command-line program. There's even a semi-standard format for it, used in man pages and generated by libraries like GNU AutoOpts. Here's one for android, one of the SDK commands supported by my script:
 android [--silent | --verbose]
   ( list [avd | target]
   | create avd ( --target <target> | --name <name> | --skin <name>
                | --path <file> | --sdcard <file> | --force ) ...
   | move avd (--name <avd> | --rename <new> | --path <file>) ...
   | (delete | update) avd --name <avd>
   | create project ( (--package | --name | --activity | --path) <val>
                    | --target <target> ) ...
   | update project ((--name | --path) <val> | --target <target>) ...
   | update adb )
My idea: what if you could teach the shell to complete a program's arguments just by writing a usage description like this one?

The Solution

With Compleat, you can add completion for any command just by writing a usage description and saving it in a configuration folder. The ten-line description of the android command above generates the same results as my 76-line bash function, and it's so much easier to write and understand! The syntax should be familiar to long-time Unix users: optional arguments are enclosed in square brackets; alternate choices are separated by vertical pipes; an ellipsis following an item means it may be repeated; and parentheses group several items into one. Words in angle brackets are parameters for the user to fill in. Let's look at some more features of the usage format. For programs with complicated arguments, it can be useful to break them down further. You can place alternate usages on their own lines, separated by semicolons, like this:
android <opts> list [avd | target];
android <opts> move avd (--name <avd> | --rename <new> | --path <file>)...;
android <opts> (delete | update) avd --name <avd>;
...and so on. Rather than repeat the common options on every line, I used a parameter <opts>. I can define that parameter using the same usage syntax.
opts = [ --silent | --verbose ];
For parameters whose values are not fixed but can be computed by another program, we use a ! symbol followed by a shell command to generate completions, like this:
avd = ! android list avd | grep 'Name:' | cut -f2 -d: ;
target = ! android list target | grep '^id:' | cut -f2 -d' ' ;
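To try the description out, the definitions are saved together as a file in Compleat's configuration folder; here is a sketch, where the ~/.compleat path is my own assumption (the README documents the real location):

  mkdir -p ~/.compleat                 # hypothetical config folder
  cat > ~/.compleat/android.usage <<'EOF'
  android <opts> list [avd | target];
  opts = [ --silent | --verbose ];
  avd = ! android list avd | grep 'Name:' | cut -f2 -d: ;
  EOF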
Any parameter without a definition will just use the shell's built-in completion rules, which suggest matching filenames by default. The README file has more details of the usage syntax, and instructions for installing the software. Give it a try, and please send in any usage files that you want to share! (Questions, bug reports, or patches are also welcome.)

Future Work

For the next release of Compleat, I would like to make installation easier by providing better packaging and pre-compiled binaries; support zsh and other non-bash shells; and write better documentation. In the long term, I'm thinking about replacing the usage-file interpreter with a compiler. The compiler would translate the usage file into shell code, or perhaps another language like C or Haskell. This would potentially improve performance (although speed isn't an issue right now on my development box), and would also make it easy for usage files to include logic written in the target language.

Final Thoughts

Recently I realized that parts of my work are so specialized that my parents and non-programmer friends will probably never really get them. For example, Compleat is a program to generate programs to help you... run programs? Sigh. Well, maybe someone out there will appreciate it. Compleat was my weekends/evenings/bus-rides project for the last few weeks (as you can see in the GitHub punch card), and my most fun side project in quite a while. It's the first "real" program I've written in Haskell, though I've been experimenting with the language for a while. Now that I'm comfortable with it, I find that Haskell's particular combination of features works just right to enable quick exploratory programming, while giving a high level of confidence in the behavior of the resulting program. Compleat 1.0 is only 160 lines of Haskell, excluding comments and imports. Every module was completely rewritten at least once as I tried and compared different approaches. This is much less daunting when the code in question is only a couple dozen lines. I don't think this particular program would have been quite as easy to write, at least for me, in any of the other platforms I know (including Ruby, Python, Scheme, and C). I had the idea for Compleat more than a year ago, but at the time I did not know how to implement it easily. I quickly realized that what I wanted to write was a specialized parser generator, and a domain-specific language to go with it. Unfortunately I never took a compiler-design class in school, and had forgotten most of what I learned in my programming-languages course. So I began studying parsing algorithms and language implementation, with Compleat as my ultimate goal. My good friend Josh and his Gazelle parser generator helped inspire me and point me toward other existing work. Compleat actually contains three parsers. The usage-file parser and the input-line tokenizer are built on the excellent Parsec library. The usage file is then translated into a parser built with my own simple set of parser combinators, which were inspired both by Parsec and by the original Monadic Parser Combinators paper by Graham Hutton and Erik Meijer. The simple evaluator for the usage DSL applies what I learned from Jonathan Tang's Write Yourself a Scheme in 48 Hours. And of course Real World Haskell was an essential resource for both the nuts and bolts and the design philosophy of Haskell.
So besides producing a tool that will be useful to me and hopefully others, I also filled in a gap in my CS education, learned some great new languages and tools, and kindled an interest in several new (to me) research areas. It has also renewed my belief in the importance of "academic" knowledge to real engineering problems. (I've already come across at least one problem in my day job that I was able to solve faster by implementing a simple parser than I would have a year ago by fumbling with regexes.) And I'll be even happier if this inspires some friends or strangers to take a closer look at Haskell, Parsec, or any problem they've thought about and didn't know enough to solve. Yet.

28 January 2009

Russell Coker: Links January 2009

Jennifer 8 Lee gave an interesting TED talk about the spread and evolution of what is called Chinese food [1]. In that talk she compares McDonalds to Microsoft and Chinese restaurants to Linux. Her points comparing the different local variations of Chinese food to the variations of Linux make sense.
The CentOS Plus repository has a kernel with support for the XFS filesystem, Postfix with MySQL support, and some other useful things [2].
Mary Gardiner comments on the recent loss of a blog server with all its content [3]. One interesting point is that when you start using a service that maintains your data, you should consider how to make personal backups in case the server goes away or you decide to stop being a customer.
Val Henson makes some interesting points about the reliability of Solid State Disks (SSD) [4]. Some people are planning to replace RAID arrays of disks with a single SSD in the belief that an SSD will be more reliable; this seems like a bad idea. Also, given the risk of corruption, it seems that we have a greater need for filesystems that store block checksums.
Lior Kaplan describes how to have multiple Linux bonding devices [5]; the comment provides some interesting detail too.
programmableweb.com has a set of links to sites that have APIs which can be used to create mashups [6]. One of the many things I would do if I had a lot more spare time is play with some of the web APIs that are out there.
Gunnar Wolf has written some insightful comments about the situation in Israel and Palestine [7]. He used to be a Zionist and spent some time living in Israel, so he knows more about the topic than most commentators.
Charles Stross has written an informative post about Ubuntu on the EeePC [8]. What is noteworthy about this is not that he's summarised the issues well, but that he is a well-known science-fiction writer and he was responding to a SFWA member. One of his short stories is on my free short stories page [9]. He also wrote Accelerando, which is one of the best sci-fi novels I've read (and it's also free) [10].
Don Marti has written about rent seeking and proprietary software [11]. It's an interesting article, though nothing really new for anyone who has followed the news about the coal and nuclear industries.
Erik writes about The Setting Sun and points out that Scott McNealy had tried to capitalise on the SCO lawsuit but Red Hat has ended up beating them in the market [12].

13 January 2009

Ross Burton: GUPnP Repositories

Zeeshan created a clone of the GUPnP repository at Gitorious today, so to any contributors to GUPnP: feel free to clone the repository there so that we can all benefit from a distributed version control system being used as it should be. NP: Rendez-Vous (Mexico), Erik Truffaz featuring Murcof
